CrypTen: Secure Multi-Party Computation Meets Machine Learning
Secure multi-party computation (MPC) allows parties to perform computations
on data while keeping that data private. This capability has great potential
for machine-learning applications: it facilitates training of machine-learning
models on private data sets owned by different parties, evaluation of one
party's private model using another party's private data, etc. Although a range
of studies implement machine-learning models via secure MPC, such
implementations are not yet mainstream. Adoption of secure MPC is hampered by
the absence of flexible software frameworks that "speak the language" of
machine-learning researchers and engineers. To foster adoption of secure MPC in
machine learning, we present CrypTen: a software framework that exposes popular
secure MPC primitives via abstractions that are common in modern
machine-learning frameworks, such as tensor computations, automatic
differentiation, and modular neural networks. This paper describes the design
of CrypTen and measures its performance on state-of-the-art models for text
classification, speech recognition, and image classification. Our benchmarks
show that CrypTen's GPU support and high-performance communication between (an
arbitrary number of) parties allow it to perform efficient private evaluation
of modern machine-learning models under a semi-honest threat model. For
example, two parties using CrypTen can securely predict phonemes in speech
recordings using Wav2Letter faster than real-time. We hope that CrypTen will
spur adoption of secure MPC in the machine-learning community.
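The private computation described above rests on secret sharing, the basic MPC primitive underlying CrypTen's arithmetic shares. Below is a minimal sketch of two-party additive secret sharing in plain Python; it is an illustration of the technique, not CrypTen's actual implementation, and the modulus `Q` and helper names are assumptions for the example:

```python
import random

Q = 2**31 - 1  # illustrative ring modulus; real systems pick this carefully

def share(value, n_parties=2):
    """Split a value into n additive shares that sum to it mod Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recombine all shares; any strict subset reveals nothing about the value."""
    return sum(shares) % Q

# Each party holds one share per value. Addition of two secret values
# requires no communication: parties add their local shares.
a_shares = share(42)
b_shares = share(100)
sum_shares = [(a + b) % Q for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142
```

Linear operations (addition, scaling) stay local as shown; multiplications require an interactive protocol, which is where the high-performance inter-party communication benchmarked in the paper matters.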
Theseus: A Library for Differentiable Nonlinear Optimization
We present Theseus, an efficient application-agnostic open source library for
differentiable nonlinear least squares (DNLS) optimization built on PyTorch,
providing a common framework for end-to-end structured learning in robotics and
vision. Existing DNLS implementations are application specific and do not
always incorporate many ingredients important for efficiency. Theseus is
application-agnostic, as we illustrate with several example applications that
are built using the same underlying differentiable components, such as
second-order optimizers, standard cost functions, and Lie groups. For
efficiency, Theseus incorporates support for sparse solvers, automatic
vectorization, batching, GPU acceleration, and gradient computation with
implicit differentiation and direct loss minimization. We perform an extensive
performance evaluation across a set of applications, demonstrating significant
efficiency gains and better scalability when these features are incorporated.
Project page: https://sites.google.com/view/theseus-a
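The inner loop of a nonlinear least-squares solver of the kind Theseus differentiates through can be sketched with a one-parameter Gauss-Newton iteration. This is an illustrative sketch in plain Python, not Theseus's API; the model `y = exp(a*x)` and the `gauss_newton` helper are assumptions chosen for the example:

```python
import math

def gauss_newton(xs, ys, a0, iters=20):
    """Fit the scalar parameter a of the model y = exp(a*x) by Gauss-Newton."""
    a = a0
    for _ in range(iters):
        # Residuals and their Jacobian with respect to a.
        r = [math.exp(a * x) - y for x, y in zip(xs, ys)]
        J = [x * math.exp(a * x) for x in xs]
        # Normal-equations step: a <- a - (J^T J)^{-1} J^T r (scalar case).
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        a -= Jtr / JtJ
    return a

# Synthetic data generated with true parameter a = 0.5.
xs = [0.1 * i for i in range(1, 11)]
ys = [math.exp(0.5 * x) for x in xs]
a_hat = gauss_newton(xs, ys, a0=0.0)  # converges toward 0.5
```

Libraries like Theseus generalize this loop to batched, sparse, GPU-resident problems and, crucially, make the solution differentiable with respect to upstream parameters (e.g., via implicit differentiation through the optimality conditions), so the solver can sit inside an end-to-end learning pipeline.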
Enabling event-triggered data plane monitoring
We propose a push-based approach to network monitoring that allows the detection, within the dataplane, of traffic aggregates. Notifications from the switch to the controller are sent only if required, avoiding the transmission or processing of unnecessary data. Furthermore, the dataplane iteratively refines the responsible IP prefixes, allowing the controller to receive information at a flexible granularity. We implemented our solution, Elastic Trie, in P4 and for two different FPGA devices. We evaluated it with packet traces from an ISP backbone. Our approach can spot changes in traffic patterns and detect, with 95% accuracy, hierarchical heavy hitters using less than 8KB of memory or superspreaders using less than 300KB. Additionally, it reduces controller-dataplane communication overheads by up to two orders of magnitude with respect to state-of-the-art solutions.
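The iterative prefix refinement described above can be sketched as a binary trie over IP address bits that splits a node once its counter crosses a reporting threshold, so monitoring resolution deepens only where traffic concentrates. The following is a simplified Python sketch of that idea, not the paper's P4/FPGA implementation; the class, threshold, and depth limit are illustrative assumptions:

```python
class ElasticTrieNode:
    def __init__(self, prefix="", depth=0):
        self.prefix = prefix   # bit-string prefix of the IP address
        self.depth = depth
        self.count = 0
        self.children = None   # expanded lazily when the counter overflows

THRESHOLD = 4   # illustrative per-prefix packet budget
MAX_DEPTH = 8   # illustrative maximum refinement depth

def update(node, ip_bits):
    """Count a packet; refine the prefix once its counter exceeds THRESHOLD."""
    node.count += 1
    if node.children is None:
        if node.count > THRESHOLD and node.depth < MAX_DEPTH:
            # Split: monitor the two child prefixes at finer granularity.
            node.children = {
                b: ElasticTrieNode(node.prefix + b, node.depth + 1)
                for b in "01"
            }
        else:
            return
    update(node.children[ip_bits[node.depth]], ip_bits)

def heavy_prefixes(node, out):
    """Collect leaf prefixes whose counters exceed THRESHOLD (report set)."""
    if node.children is None:
        if node.count > THRESHOLD:
            out.append((node.prefix, node.count))
    else:
        for child in node.children.values():
            heavy_prefixes(child, out)
    return out
```

Feeding repeated packets for one address drills the trie down along that address's bits, while prefixes carrying little traffic stay coarse; only leaves that still exceed the threshold would trigger a notification to the controller, which is what keeps switch-to-controller communication low.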